Overcoming error-in-variable problem in data-driven model discovery by orthogonal distance regression

Fung, Lloyd

arXiv.org Machine Learning

Despite the recent proliferation of machine learning methods like SINDy that promise automatic discovery of governing equations from time-series data, there remain significant challenges to discovering models from noisy datasets. One reason is that the linear regression underlying these methods assumes that all noise resides in the training target (the regressand), which is the time derivative, whereas the measurement noise is in the states (the regressors). Recent methods like modified-SINDy and DySMHO address this error-in-variable problem by leveraging information from the model's temporal evolution, but they also impose the equation as a hard constraint, which effectively assumes no error in the regressand. Without relaxation, this hard constraint prevents assimilation of data longer than the Lyapunov time. Instead, the fulfilment of the model equation should be treated as a soft constraint to account for the small yet critical error introduced by numerical truncation. The uncertainties in both the regressor and the regressand invite the use of orthogonal distance regression (ODR). By incorporating ODR into the Bayesian framework for model selection, we introduce a novel method for model discovery, termed ODR-BINDy, and assess its performance against current SINDy variants using the Lorenz63, Rossler, and Van der Pol systems as case studies. Our findings indicate that ODR-BINDy consistently outperforms all existing methods in recovering the correct model from sparse and noisy datasets. For instance, our ODR-BINDy method reliably recovers the Lorenz63 equation from data with noise contamination levels of up to 30%.
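As a minimal illustration of the ODR idea the abstract invokes (a sketch using SciPy's `scipy.odr`, not the authors' ODR-BINDy implementation), orthogonal distance regression fits a model while accounting for noise in both the regressor and the regressand:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)

# True linear model y = 2x + 1, with noise in BOTH variables
x_true = np.linspace(0, 10, 50)
y_true = 2.0 * x_true + 1.0
x_obs = x_true + rng.normal(0, 0.3, x_true.shape)  # error in the regressor
y_obs = y_true + rng.normal(0, 0.3, y_true.shape)  # error in the regressand

def linear(beta, x):
    return beta[0] * x + beta[1]

model = odr.Model(linear)
data = odr.RealData(x_obs, y_obs, sx=0.3, sy=0.3)  # noise levels on each axis
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
print(fit.beta)  # estimated [slope, intercept], close to [2, 1]
```

Unlike ordinary least squares, which minimizes only vertical residuals, ODR minimizes the orthogonal distance from each noisy point to the fitted curve.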


On-Device Training of PV Power Forecasting Models in a Smart Meter for Grid Edge Intelligence

Huang, Jian, Zhu, Yongli, Xu, Linna, Zheng, Zhe, Cui, Wenpeng, Sun, Mingyang

arXiv.org Artificial Intelligence

In this paper, an edge-side model training study is conducted on a resource-limited smart meter. The motivation for grid-edge intelligence and the concept of on-device training are introduced. Then, the technical preparation steps for on-device training are described. A case study on the task of photovoltaic power forecasting is presented, where two representative machine learning models are investigated: a gradient boosting tree model and a recurrent neural network model. To adapt to the resource-limited situation in the smart meter, "mixed"- and "reduced"-precision training schemes are also devised. Experimental results demonstrate the feasibility of economically achieving grid-edge intelligence via the existing advanced metering infrastructures.
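The "reduced"-precision idea can be sketched in NumPy on a trivial linear model trained by plain gradient descent (a toy illustration under assumed data and hyperparameters, not the paper's smart-meter implementation): casting all training arithmetic to float16 halves memory at some cost in accuracy.

```python
import numpy as np

def train_linear(x, y, dtype, lr=0.5, epochs=200):
    """Gradient descent on y ~ w*x + b, with all arithmetic in `dtype`."""
    x, y = x.astype(dtype), y.astype(dtype)
    w, b, lr = dtype(0.0), dtype(0.0), dtype(lr)
    for _ in range(epochs):
        err = (w * x + b) - y
        w -= lr * dtype(2.0) * dtype(np.mean(err * x))
        b -= lr * dtype(2.0) * dtype(np.mean(err))
    return float(w), float(b)

x = np.linspace(0, 1, 64)
y = 3.0 * x + 0.5

w32, b32 = train_linear(x, y, np.float32)
w16, b16 = train_linear(x, y, np.float16)  # reduced precision: half the memory
print(w32, b32)  # close to 3.0, 0.5
print(w16, b16)  # close, but limited by float16 resolution
```

A "mixed"-precision scheme would instead keep a float32 master copy of the parameters while computing in float16.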


Overfitting has a limitation: a model-independent generalization error bound based on Rényi entropy

Suzuki, Atsushi

arXiv.org Machine Learning

Will further scaling up of machine learning models continue to bring success? A significant challenge in answering this question lies in understanding generalization error, the impact of overfitting. Conventional analyses link error bounds to model complexity and thus fail to fully explain the success of extremely large architectures. This research introduces a novel perspective by establishing a model-independent upper bound on generalization error, applicable to algorithms whose outputs are determined solely by the data's histogram, such as empirical risk minimization or gradient-based methods. Crucially, this bound is shown to depend only on the Rényi entropy of the data-generating distribution, suggesting that a small generalization error can be maintained even with arbitrarily large models, provided the data quantity is sufficient relative to this entropy. The framework also directly explains why generalization performance degrades significantly when random noise is injected into the data: the injection increases the data distribution's Rényi entropy. Furthermore, we adapt the no-free-lunch theorem to be data-distribution-dependent, demonstrating that an amount of data corresponding to the Rényi entropy is indeed essential for successful learning, thereby highlighting the tightness of our proposed generalization bound.
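For reference, the Rényi entropy of order alpha is H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), recovering Shannon entropy as alpha approaches 1. A small sketch (the example distributions are illustrative only):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) = log(sum p_i^alpha) / (1 - alpha), in nats."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        # alpha -> 1 recovers the Shannon entropy (0 log 0 := 0)
        logs = np.log(p, where=p > 0, out=np.zeros_like(p))
        return -np.sum(p * logs)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

uniform = np.ones(8) / 8                   # maximal entropy over 8 outcomes
peaked = np.array([0.9] + [0.1 / 7] * 7)   # concentrated, low-entropy

print(renyi_entropy(uniform, 2))  # log 8 (uniform has the same H for all alpha)
print(renyi_entropy(peaked, 2))   # much smaller
```

Injecting random noise into the data spreads its distribution out, raising this quantity, which in the paper's framework loosens the generalization bound.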


Crypto-ncRNA: Non-coding RNA (ncRNA) Based Encryption Algorithm

Wang, Xu, Wang, Yiquan, Huang, Tin-yeh

arXiv.org Artificial Intelligence

In the looming post-quantum era, traditional cryptographic systems are increasingly vulnerable to quantum computing attacks that can compromise their mathematical foundations. To address this critical challenge, we propose crypto-ncRNA, a bio-convergent cryptographic framework that leverages the dynamic folding properties of non-coding RNA (ncRNA) to generate high-entropy, quantum-resistant keys and produce unpredictable ciphertexts. The framework employs a novel, multi-stage process: encoding plaintext into RNA sequences, predicting and manipulating RNA secondary structures using advanced algorithms, and deriving cryptographic keys through the intrinsic physical unclonability of RNA molecules. Experimental evaluations indicate that, although crypto-ncRNA's encryption speed is marginally lower than that of AES, it significantly outperforms RSA in terms of efficiency and scalability while achieving a 100% pass rate on the NIST SP 800-22 randomness tests. These results demonstrate that crypto-ncRNA offers a promising and robust approach for securing digital infrastructures against the evolving threats posed by quantum computing. Moreover, with the rapid advancement of artificial intelligence, RNA-based research has gradually unfolded into a new realm of innovation (Townshend et al. (2021)). Recent studies showed that the dynamic folding processes of RNA molecules intrinsically exhibit physical unclonable function (PUF) characteristics (Herder et al. (2014); Li et al. (2022); Luescher et al. (2024); Zhou et al. (2021)), thereby establishing a pathway for designing post-quantum cryptography (PQC) systems (Arapinis et al. (2021); Cambou et al. (2021)).
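The first stage, encoding plaintext into RNA sequences, can be illustrated with a hypothetical two-bits-per-nucleotide codebook (the paper's actual mapping and the later folding/key-derivation stages are not shown here):

```python
# Hypothetical 2-bit -> nucleotide codebook; the paper's mapping may differ
NUC = "ACGU"

def encode_to_rna(data: bytes) -> str:
    """Encode each byte as four nucleotides, two bits at a time."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(NUC[(byte >> shift) & 0b11])
    return "".join(seq)

def decode_from_rna(seq: str) -> bytes:
    """Invert the encoding: four nucleotides back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | NUC.index(ch)
        out.append(byte)
    return bytes(out)

rna = encode_to_rna(b"hi")
print(rna)  # 8 nucleotides for 2 bytes
assert decode_from_rna(rna) == b"hi"
```

This stage alone carries no secrecy; in the framework, security comes from the subsequent structure-dependent transformations.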


iFuzzyTL: Interpretable Fuzzy Transfer Learning for SSVEP BCI System

Jiang, Xiaowei, Cao, Beining, Ou, Liang, Chang, Yu-Cheng, Do, Thomas, Lin, Chin-Teng

arXiv.org Artificial Intelligence

The rapid evolution of Brain-Computer Interfaces (BCIs) has significantly influenced the domain of human-computer interaction, with Steady-State Visual Evoked Potentials (SSVEP) emerging as a notably robust paradigm. This study explores advanced classification techniques leveraging interpretable fuzzy transfer learning (iFuzzyTL) to enhance the adaptability and performance of SSVEP-based systems. Recent efforts have sought to reduce calibration requirements through innovative transfer learning approaches that refine cross-subject generalizability via strategic application of domain adaptation and few-shot learning strategies. Pioneering developments in deep learning also offer promising enhancements, facilitating robust domain adaptation and significantly improving system responsiveness and accuracy in SSVEP classification. However, these methods often require complex tuning and extensive data, limiting immediate applicability. iFuzzyTL introduces an adaptive framework that combines fuzzy logic principles with neural network architectures, focusing on efficient knowledge transfer and domain adaptation. iFuzzyTL refines input signal processing and classification in a human-interpretable format by integrating fuzzy inference systems and attention mechanisms. This approach bolsters the model's precision and aligns with real-world operational demands by effectively managing the inherent variability and uncertainty of EEG data. The model's efficacy is demonstrated across three datasets: 12JFPM (89.70% accuracy for 1s with an information transfer rate (ITR) of 149.58), Benchmark (85.81% accuracy for 1s with an ITR of 213.99), and eldBETA (76.50% accuracy for 1s with an ITR of 94.63), achieving state-of-the-art results and setting new benchmarks for SSVEP BCI performance.
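SSVEP ITR figures like those above are conventionally computed with the Wolpaw formula; a sketch (the selection time here is an assumption, and reported figures often include gaze-shift time, so exact values will differ):

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Wolpaw information transfer rate for an N-target BCI, in bits/min."""
    n, p = n_targets, accuracy
    if p <= 1.0 / n:          # at or below chance level, no information
        return 0.0
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# e.g. a 12-target speller at 89.7% accuracy, one selection per second
print(itr_bits_per_min(12, 0.897, 1.0))  # bits per minute
```

Shorter selection windows raise ITR linearly but usually lower accuracy, which is the trade-off the reported 1 s results navigate.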


STAA: Spatio-Temporal Alignment Attention for Short-Term Precipitation Forecasting

Chen, Min, Yang, Hao, Li, Shaohan, Qin, Xiaolin

arXiv.org Artificial Intelligence

There is a great need to accurately predict short-term precipitation, which has socioeconomic effects in areas such as agriculture and disaster prevention. Recently, forecasting models have employed multi-source data as multi-modality input, thus improving prediction accuracy. However, the prevailing methods usually suffer from desynchronization of the multi-source variables, insufficient capability to capture spatio-temporal dependency, and unsatisfactory performance in predicting extreme precipitation events. To address these problems, we propose a short-term precipitation forecasting model based on spatio-temporal alignment attention, with SATA as the temporal alignment module and STAU as the spatio-temporal feature extractor, which filters high-pass features from precipitation signals and captures multi-term temporal dependencies. Based on satellite and ERA5 data from the southwestern region of China, our model achieves an improvement of 12.61% in terms of RMSE in comparison with state-of-the-art methods.
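The high-pass filtering role attributed to STAU can be crudely imitated by subtracting a moving average from a signal, isolating rapid fluctuations from the slow trend (a stand-in sketch; the window size and the synthetic signals are invented for illustration, not the paper's learned filters):

```python
import numpy as np

def high_pass(signal, window=5):
    """Subtract a moving average, keeping only rapid fluctuations."""
    kernel = np.ones(window) / window
    return signal - np.convolve(signal, kernel, mode="same")

t = np.arange(200) / 200
slow = np.sin(2 * np.pi * t)              # slow, trend-like component
fast = 0.3 * np.sin(2 * np.pi * 40 * t)   # short-lived, burst-like component
residual = high_pass(slow + fast)         # ~= the fast component alone
```

In a learned model, such filtering emphasizes the sharp, localized features that matter for extreme precipitation events.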


Enhancing Cloud-Native Resource Allocation with Probabilistic Forecasting Techniques in O-RAN

Kasuluru, Vaishnavi, Blanco, Luis, Zeydan, Engin, Bel, Albert, Antonopoulos, Angelos

arXiv.org Artificial Intelligence

The need for intelligent and efficient resource provisioning for the productive management of resources in real-world scenarios is growing with the evolution of telecommunications towards the 6G era. Technologies such as Open Radio Access Network (O-RAN) can help to build interoperable solutions for the management of complex systems. Probabilistic forecasting, in contrast to deterministic single-point estimators, offers a different approach to resource allocation by quantifying the uncertainty of the generated predictions. This paper examines the cloud-native aspects of O-RAN together with the radio App (rApp) deployment options. The integration of probabilistic forecasting techniques as an rApp in O-RAN is also emphasized, along with case studies of real-world applications. Through a comparative analysis of forecasting models using error metrics, we show the advantages of the Deep Autoregressive Recurrent network (DeepAR) over deterministic and other probabilistic estimators. Furthermore, the simplicity of Simple-Feed-Forward (SFF) leads to a fast runtime but does not capture the temporal dependencies of the input data. Finally, we present some aspects related to the practical applicability of cloud-native O-RAN with probabilistic forecasting.
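Why a distributional forecast helps provisioning can be sketched with a toy Gaussian predictive distribution (the demand numbers are invented, and DeepAR's actual output is a learned likelihood, not a fixed Gaussian):

```python
from statistics import NormalDist

def provision_level(mean: float, std: float, quantile: float = 0.9) -> float:
    """Resource level covering `quantile` of a Gaussian predictive distribution.

    A point estimator can only provision at the mean; a probabilistic forecast
    lets the operator trade over- against under-provisioning risk explicitly.
    """
    return NormalDist(mu=mean, sigma=std).inv_cdf(quantile)

# Hypothetical forecasted resource demand: mean 50 units, std 8
print(provision_level(50, 8, 0.5))   # the point forecast itself, 50.0
print(provision_level(50, 8, 0.9))   # ~60.25, headroom against demand bursts
```

The same predictive distribution yields any quantile, so one forecast serves conservative and aggressive allocation policies alike.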


Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models

Beheshti, Soosan, Shamsi, Mahdi

arXiv.org Artificial Intelligence

The goal of a learning algorithm is to receive a training data set as input and provide a hypothesis that can generalize to all possible data points from a domain set. The hypothesis is chosen from hypothesis classes with potentially different complexities. Linear regression modeling is an important category of learning algorithms. The practical uncertainty of the target samples affects the generalization performance of the learned model. Failing to choose a proper model or hypothesis class can lead to serious issues such as underfitting or overfitting. These issues have been addressed by altering cost functions or by utilizing cross-validation methods. These approaches can introduce new hyperparameters with their own challenges and uncertainties, or increase the computational complexity of the learning algorithm. On the other hand, the theory of probably approximately correct (PAC) learning aims at defining learnability based on probabilistic settings. Despite its theoretical value, PAC does not address practical learning issues on many occasions. This work is inspired by the foundation of PAC and is motivated by the existing regression learning issues. The proposed approach, denoted epsilon-Confidence Approximately Correct (epsilon CoAC), utilizes the Kullback-Leibler divergence (relative entropy) and proposes a new related typical set in the set of hyperparameters to tackle the learnability issue. Moreover, it enables the learner to compare hypothesis classes of different complexity orders and choose among them the optimum with the minimum epsilon in the epsilon CoAC framework. Not only does epsilon CoAC learnability overcome the issues of overfitting and underfitting, it also shows advantages over the well-known cross-validation method in terms of both time consumption and accuracy.
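The Kullback-Leibler divergence that epsilon CoAC builds on, sketched for discrete distributions (the example distributions are illustrative, not from the paper):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), in nats (0 log 0 := 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
print(kl_divergence(p, p))  # 0.0: no divergence between identical distributions
print(kl_divergence(p, q))  # > 0, and asymmetric: D(p||q) != D(q||p)
```

KL divergence is zero exactly when the two distributions coincide, which is what makes it a natural measure of how far a hypothesis is from the data-generating distribution.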


A Transformer-based deep neural network model for SSVEP classification

Chen, Jianbo, Zhang, Yangsong, Pan, Yudong, Xu, Peng, Guan, Cuntai

arXiv.org Artificial Intelligence

Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so the need for methods that can alleviate this demand has become urgent. In recent years, developing methods that work in the inter-subject classification scenario has become a promising new direction. Transformer, a popular deep learning model with excellent performance, has been used in EEG signal classification tasks. Therefore, in this study, we propose a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, the model adopts the frequency spectrum of SSVEP data as input, and explores the spectral and spatial domain information for classification. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on filter bank technology (FB-SSVEPformer) is proposed to further improve the classification performance. Experiments were conducted using two open datasets (Dataset 1: 10 subjects, 12-class task; Dataset 2: 35 subjects, 40-class task) in the inter-subject classification scenario. The experimental results show that the proposed models achieve better classification accuracy and information transfer rates than other baseline methods. The proposed model validates the feasibility of Transformer-based deep learning models for the SSVEP classification task, and could serve as a potential means to alleviate the calibration procedure in practical applications of SSVEP-based BCI systems.
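The spectrum-as-input idea can be sketched on a synthetic single-channel SSVEP segment (the sampling rate, flicker frequency, and noise level are assumptions for illustration, not dataset parameters): the FFT magnitude spectrum exposes the stimulation frequency that SSVEPformer-style models classify.

```python
import numpy as np

fs = 250                         # assumed EEG sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 s segment
# Synthetic single-channel response to a 12 Hz flicker, plus noise
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size   # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak = freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin
print(peak)  # ~12 Hz, the stimulation frequency
```

The filter-bank variant (FB-SSVEPformer) would additionally split the signal into sub-bands so that harmonics of the stimulation frequency contribute separately.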


JAXFit: Trust Region Method for Nonlinear Least-Squares Curve Fitting on the GPU

Hofer, Lucas R., Krstajić, Milan, Smith, Robert P.

arXiv.org Artificial Intelligence

We implement a trust region method on the GPU for nonlinear least-squares curve fitting problems using a new deep learning Python library called JAX. Our open-source package, JAXFit, works for both unconstrained and constrained curve fitting problems and allows the fit functions to be defined in Python alone, without any specialized knowledge of either the GPU or CUDA programming. Since JAXFit runs on the GPU, it is much faster than CPU-based libraries and even other GPU-based libraries, despite being very easy to use. Additionally, due to JAX's deep learning foundations, the Jacobian in JAXFit's trust region algorithm is calculated with automatic differentiation, rather than using derivative approximations or requiring the user to define the fit function's partial derivatives.
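The problem class can be illustrated with SciPy's CPU trust-region solver (a sketch, not JAXFit itself; JAXFit additionally runs on the GPU and obtains the Jacobian via JAX's automatic differentiation rather than finite differences):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from a decaying exponential y = a * exp(-b x), plus noise
rng = np.random.default_rng(1)
x = np.linspace(0, 4, 100)
y = 2.5 * np.exp(-1.3 * x) + rng.normal(0, 0.02, x.size)

def residuals(params):
    a, b = params
    return a * np.exp(-b * x) - y

# 'trf' is SciPy's trust-region reflective solver; by default it approximates
# the Jacobian with finite differences, which autodiff would compute exactly
fit = least_squares(residuals, x0=[1.0, 1.0], method="trf")
print(fit.x)  # estimated [a, b], close to [2.5, 1.3]
```

Defining `residuals` as a pure function of the parameters is exactly the style that lets an autodiff framework derive the Jacobian automatically.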